Selective Greedy Equivalence Search: Finding Optimal Bayesian Networks Using a Polynomial Number of Score Evaluations
Authors
Abstract
We introduce Selective Greedy Equivalence Search (SGES), a restricted version of Greedy Equivalence Search (GES). SGES retains the asymptotic correctness of GES but, unlike GES, has polynomial performance guarantees. In particular, we show that when data are sampled independently from a distribution that is perfect with respect to a DAG G defined over the observable variables then, in the limit of large data, SGES will identify G's equivalence class after a number of score evaluations that is (1) polynomial in the number of nodes and (2) exponential in various complexity measures including maximum number of parents, maximum clique size, and a new measure called v-width that is at least as small as, and potentially much smaller than, the other two. More generally, we show that for any hereditary and equivalence-invariant property Π known to hold in G, we retain the large-sample optimality guarantees of GES even if we ignore any GES deletion operator during the backward phase that results in a state for which Π does not hold in the common-descendants subgraph.
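The restriction can be pictured as a filter on the backward phase: before a deletion operator is scored, check whether the resulting state could still satisfy a property Π known to hold in the generative model, and skip it otherwise. The sketch below is only an illustration of that pruning idea, not the paper's algorithm: it represents DAGs as parent-set dictionaries, uses "every node has at most k parents" as the hereditary property, and checks Π on the whole resulting graph rather than on the common-descendants subgraph, with scoring omitted entirely.

```python
# Illustrative sketch only: prunes candidate edge deletions in a GES-style
# backward phase using a hereditary property (here: bounded in-degree).
# DAGs are represented as {node: set(parents)}. This is NOT the paper's
# implementation, and the property is checked on the whole resulting graph
# rather than on the common-descendants subgraph described in the abstract.

def max_parents_at_most(dag, k):
    """Property Pi: every node has at most k parents (hereditary)."""
    return all(len(parents) <= k for parents in dag.values())

def delete_edge(dag, x, y):
    """Return a copy of the DAG with the edge x -> y removed."""
    new_dag = {node: set(parents) for node, parents in dag.items()}
    new_dag[y].discard(x)
    return new_dag

def selective_deletions(dag, k):
    """Enumerate edge deletions whose resulting graph still satisfies Pi.

    SGES-style idea: deletion operators leading to states that violate a
    property known to hold in the generative model can be skipped without
    losing large-sample correctness, which shrinks the search neighbourhood.
    """
    candidates = []
    for y, parents in dag.items():
        for x in parents:
            result = delete_edge(dag, x, y)
            if max_parents_at_most(result, k):
                candidates.append(((x, y), result))
    return candidates

if __name__ == "__main__":
    # Toy DAG: A -> C, B -> C, C -> D
    dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
    for (x, y), _ in selective_deletions(dag, k=1):
        print(f"candidate deletion kept: {x} -> {y}")
```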
Similar papers
Learning Equivalence Classes of Bayesian Network Structures
Approaches to learning Bayesian networks from data typically combine a scoring function with a heuristic search procedure. Given a Bayesian network structure, many of the scoring functions derived in the literature return a score for the entire equivalence class to which the structure belongs. When using such a scoring function, it is appropriate for the heuristic search algorithm to search...
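A standard example of such a criterion is the BIC, which assigns identical scores to equivalent structures; for instance, A → B and B → A over two binary variables score the same, so a single evaluation covers the whole equivalence class. The snippet below is my own illustration of this (not code from the cited paper), computing the discrete BIC for both orientations on a tiny hand-made dataset.

```python
# Illustration of score equivalence: the discrete BIC assigns the same score
# to the equivalent structures A -> B and B -> A.
import math
from collections import Counter

def bic(data, child, parents, arity):
    """Discrete BIC contribution of one node given its parent set."""
    n = len(data)
    counts = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(c * math.log(c / parent_counts[pa]) for (pa, _), c in counts.items())
    n_params = (arity[child] - 1) * math.prod(arity[p] for p in parents)
    return loglik - 0.5 * n_params * math.log(n)

# Small hand-made dataset in which A and B are dependent.
data = ([{"A": 0, "B": 0}] * 3 + [{"A": 0, "B": 1}] +
        [{"A": 1, "B": 0}] + [{"A": 1, "B": 1}] * 3)
arity = {"A": 2, "B": 2}

score_a_to_b = bic(data, "A", [], arity) + bic(data, "B", ["A"], arity)
score_b_to_a = bic(data, "B", [], arity) + bic(data, "A", ["B"], arity)
print(score_a_to_b, score_b_to_a)   # equal up to floating-point rounding
```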
Learning Optimal Augmented Bayes Networks
Naive Bayes is a simple Bayesian classifier with strong independence assumptions among the attributes. This classifier, despite its strong independence assumptions, often performs well in practice. It is believed that relaxing the independence assumptions of a naive Bayes classifier may improve the classification accuracy of the resulting structure. While finding an optimal unconstrained Bayesi...
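For context, the independence assumption in question is that every attribute is conditionally independent of the others given the class, so the joint likelihood factors into per-attribute terms; augmented networks relax this by allowing edges between attributes. The toy classifier below is a generic illustration of that factorization, not code from the cited paper.

```python
# Minimal categorical naive Bayes (illustration only): each attribute is
# treated as conditionally independent of the others given the class, which
# is the assumption that augmented Bayes networks relax.
import math
from collections import Counter, defaultdict

def train(rows, labels):
    class_counts = Counter(labels)
    attr_counts = defaultdict(Counter)   # (attr_index, class) -> value counts
    for row, y in zip(rows, labels):
        for i, value in enumerate(row):
            attr_counts[(i, y)][value] += 1
    return class_counts, attr_counts

def predict(row, class_counts, attr_counts, n_values=2):
    n = sum(class_counts.values())
    best, best_logp = None, -math.inf
    for y, cy in class_counts.items():
        logp = math.log(cy / n)
        for i, value in enumerate(row):
            counts = attr_counts[(i, y)]
            # Laplace smoothing so unseen values do not zero out the product.
            logp += math.log((counts[value] + 1) / (cy + n_values))
        if logp > best_logp:
            best, best_logp = y, logp
    return best

rows = [(0, 0), (0, 1), (1, 0), (1, 1), (1, 1)]
labels = ["no", "no", "no", "yes", "yes"]
model = train(rows, labels)
print(predict((1, 1), *model))   # -> "yes"
```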
On Local Optima in Learning Bayesian Networks
This paper proposes and evaluates the k-greedy equivalence search algorithm (KES) for learning Bayesian networks (BNs) from complete data. The main characteristic of KES is that it allows a trade-off between greediness and randomness, thus exploring different good local optima when run repeatedly. When greediness is set at maximum, KES corresponds to the greedy equivalence search algorithm (GES...
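The abstract does not spell out the selection rule, but one plausible way to realise such a greediness/randomness trade-off is to examine only a random subset of the score-improving neighbours and take the best member of that subset; with the subset covering all improving neighbours the step is fully greedy. The sketch below illustrates that idea only and should not be read as a verified reproduction of KES.

```python
# Sketch of a greediness/randomness trade-off in neighbour selection
# (an illustration in the spirit of KES, not a verified reproduction of it):
# sample a random subset of the score-improving neighbours and return the
# best element of that subset; with k = 1 this degenerates to pure greedy.
import math
import random

def select_neighbour(improving, k, rng=random):
    """Pick from `improving`, a list of (score_gain, state) pairs with gain > 0.

    k in (0, 1] is the fraction of improving neighbours examined before the
    best of that random subset is returned.
    """
    if not improving:
        return None                       # local optimum: no improving move
    subset_size = max(1, math.floor(k * len(improving)))
    subset = rng.sample(improving, subset_size)
    return max(subset, key=lambda pair: pair[0])

# Toy usage: five candidate moves with their score gains.
moves = [(0.2, "m1"), (1.5, "m2"), (0.7, "m3"), (0.1, "m4"), (0.9, "m5")]
print(select_neighbour(moves, k=1.0))     # always the best move, (1.5, 'm2')
print(select_neighbour(moves, k=0.4))     # best of a random subset of two
```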
Learning Equivalence Classes of Bayesian Network Structures
Two Bayesian-network structures are said to be equivalent if the set of distributions that can be represented with one of those structures is identical to the set of distributions that can be represented with the other. Many scoring criteria that are used to learn Bayesian-network structures from data are score equivalent; that is, these criteria do not distinguish among networks that are equiva...
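Equivalence defined this way has a purely graphical characterisation (Verma and Pearl): two DAGs are equivalent exactly when they have the same skeleton and the same v-structures, i.e. colliders X → Z ← Y with X and Y non-adjacent. The small check below applies that criterion to DAGs stored as parent-set dictionaries; it is an illustration, not code from the cited paper.

```python
# Graphical test of Markov equivalence (Verma & Pearl): two DAGs are
# equivalent iff they share the same skeleton and the same v-structures.
# DAGs are represented as {node: set(parents)}.

def skeleton(dag):
    return {frozenset((x, y)) for y, parents in dag.items() for x in parents}

def v_structures(dag):
    vs = set()
    edges = skeleton(dag)
    for z, parents in dag.items():
        for x in parents:
            for y in parents:
                if x < y and frozenset((x, y)) not in edges:
                    vs.add((x, z, y))     # collider x -> z <- y, x and y non-adjacent
    return vs

def equivalent(g1, g2):
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

# A -> B -> C and A <- B <- C are equivalent; A -> B <- C is not (it has a collider).
chain1 = {"A": set(), "B": {"A"}, "C": {"B"}}
chain2 = {"A": {"B"}, "B": {"C"}, "C": set()}
collider = {"A": set(), "B": {"A", "C"}, "C": set()}
print(equivalent(chain1, chain2))    # True
print(equivalent(chain1, collider))  # False
```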
Bayesian and Decision Models in AI 2009-2010 Assignment II – Learning Bayesian Networks
Search-and-score algorithms search for a Bayesian network structure that fits the data best (in some sense). They start with an initial network structure (often a graph without arcs or a complete graph) and then traverse the search space of network structures by, in each step, either removing an arc, adding an arc, or reversing an arc. Read again the paper by Castello and Kočka [1] for a good ov...
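To make the traversal concrete, the sketch below enumerates the neighbours of a structure under exactly these three operators, discarding any result that contains a cycle; it is an illustration with DAGs stored as parent-set dictionaries, not code from the assignment or the cited paper.

```python
# Enumerating the neighbourhood of a DAG under the three classic operators
# (add, remove, reverse an arc), keeping only acyclic results. Illustration
# only; DAGs are {node: set(parents)}.
from itertools import permutations

def is_acyclic(dag):
    """Kahn-style check: repeatedly remove nodes with no remaining parents."""
    remaining = {node: set(parents) for node, parents in dag.items()}
    while remaining:
        roots = [n for n, ps in remaining.items() if not ps]
        if not roots:
            return False
        for r in roots:
            del remaining[r]
        for ps in remaining.values():
            ps.difference_update(roots)
    return True

def neighbours(dag):
    """Yield (operation, resulting DAG) for every legal single-arc change."""
    for x, y in permutations(dag, 2):
        g = {node: set(parents) for node, parents in dag.items()}
        if x in dag[y]:
            g[y].discard(x)                       # remove the arc x -> y
            yield ("remove", x, y), g
            g2 = {node: set(ps) for node, ps in g.items()}
            g2[x].add(y)                          # reverse x -> y into y -> x
            if is_acyclic(g2):
                yield ("reverse", x, y), g2
        elif y not in dag[x]:
            g[y].add(x)                           # add the arc x -> y
            if is_acyclic(g):
                yield ("add", x, y), g

dag = {"A": set(), "B": {"A"}, "C": set()}        # A -> B, isolated C
for op, _ in neighbours(dag):
    print(op)
```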